An intelligence explosion is the expected outcome of the hypothesized technological singularity, that is, the result of humanity building artificial general intelligence (strong AI). Strong AI would be capable of recursive self-improvement, leading to the emergence of superintelligence, the limits of which are unknown. The notion of an "intelligence explosion" was first described by I. J. Good, who speculated on the effects of superhuman machines, should they ever be invented.

Although technological progress has been accelerating, it has been limited by the basic intelligence of the human brain, which has not, according to Paul R. Ehrlich, changed significantly for millennia.〔Ehrlich, Paul. The Dominant Animal: Human Evolution and the Environment〕 However, with the increasing power of computers and other technologies, it might eventually be possible to build a machine that is more intelligent than humanity.〔Superbrains born of silicon will change everything.〕 If a superhuman intelligence were invented, whether through the amplification of human intelligence or through artificial intelligence, it would bring to bear greater problem-solving and inventive skills than humans currently possess. It could then design an even more capable machine, or rewrite its own software to become even more intelligent. That more capable machine could in turn design a machine of yet greater capability. These iterations of recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or by theoretical computation set in.

== Plausibility ==
Most proposed methods for creating superhuman or transhuman minds fall into one of two categories: intelligence amplification of human brains and artificial intelligence. The speculated means of intelligence augmentation are numerous, and include bioengineering, genetic engineering, nootropic drugs, AI assistants, direct brain–computer interfaces and mind uploading. The existence of multiple paths to an intelligence explosion makes a singularity more likely; for a singularity not to occur, they would all have to fail.〔"What is the Singularity?" Singularity Institute for Artificial Intelligence〕

Robin Hanson is skeptical of human intelligence augmentation, writing that once the "low-hanging fruit" of easy methods for increasing human intelligence has been exhausted, further improvements will become increasingly difficult to find. Despite the many speculated means of amplifying human intelligence, non-human artificial intelligence (specifically seed AI) is the most popular option among organizations trying to advance the singularity.

Whether or not an intelligence explosion occurs depends on three factors.〔David Chalmers, John Locke Lecture, 10 May, Exam Schools, Oxford, presenting a philosophical analysis of the possibility of a technological singularity or "intelligence explosion" resulting from recursively self-improving AI.〕 The first, accelerating factor is the new intelligence enhancements made possible by each previous improvement. Conversely, as intelligences become more advanced, further advances will become more and more complicated, possibly outweighing the advantage of increased intelligence. Each improvement must be able to beget at least one more improvement, on average, for the singularity to continue. Finally, the laws of physics will eventually prevent any further improvement.
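The interplay of these three factors can be illustrated with a toy model. The sketch below is not drawn from the cited sources; the function name, parameters and numbers are hypothetical, chosen only to show how sustained self-improvement, diminishing returns and a physical ceiling trade off against one another.

```python
# Toy model (illustrative assumptions only) of the criterion that each improvement
# must beget at least a comparable further improvement for the explosion to continue.
# Intelligence grows by a multiplicative factor per round; the factor shrinks as easy
# gains are used up, and growth is capped by a hard physical limit.

def simulate(rounds: int, gain: float, decay: float, physical_limit: float) -> list[float]:
    """Return the intelligence level after each round of self-improvement.

    gain           -- multiplicative improvement from the first round (1.5 = +50%)
    decay          -- how quickly later improvements get harder (0 = no diminishing returns)
    physical_limit -- ceiling imposed by the laws of physics / theoretical computation
    """
    level, levels = 1.0, []
    for n in range(rounds):
        step = 1.0 + (gain - 1.0) * (1.0 - decay) ** n  # size of the n-th improvement
        level = min(level * step, physical_limit)        # physics caps the process
        levels.append(level)
    return levels

if __name__ == "__main__":
    # Explosive regime: each improvement keeps enabling a comparable improvement.
    print(simulate(rounds=10, gain=1.5, decay=0.0, physical_limit=1e6)[-1])  # ~57.7
    # Fizzling regime: each improvement is much harder than the last; growth stalls.
    print(simulate(rounds=10, gain=1.5, decay=0.5, physical_limit=1e6)[-1])  # ~2.4
```

With no diminishing returns the level keeps compounding until the physical limit intervenes; with strong diminishing returns the same starting gain quickly stalls, which is the sense in which the second factor can outweigh the first.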
There are two logically independent, but mutually reinforcing, accelerating effects: increases in the speed of computation and improvements to the algorithms used.〔The Singularity: A Philosophical Analysis, David J. Chalmers〕 The former is predicted by Moore's Law and forecast improvements in hardware,〔ITRS〕 and is comparatively similar to previous technological advances. On the other hand, most AI researchers believe that software is more important than hardware.
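One way to see why the two effects are independent yet mutually reinforcing is to treat effective capability as the product of hardware speed and algorithmic efficiency. The doubling times below are illustrative assumptions, not forecasts (Moore's Law is conventionally stated as a doubling roughly every two years).

```python
# Illustrative sketch (assumed numbers): effective capability as the product of
# hardware speed and algorithmic efficiency, each growing on its own doubling time.

def capability(years: float, hw_doubling: float = 2.0, sw_doubling: float = 4.0) -> float:
    """Relative capability after `years`, normalised to 1.0 at year 0."""
    hardware = 2.0 ** (years / hw_doubling)   # speed of computation (Moore's-Law-like)
    software = 2.0 ** (years / sw_doubling)   # algorithmic / software improvements
    return hardware * software                # the two effects multiply, not add

if __name__ == "__main__":
    for y in (0, 4, 8):
        print(y, capability(y))  # 1.0, ~8.0, ~64.0 under the assumed doubling times
```

Because the contributions multiply, progress on either axis alone accelerates the whole, and progress on both compounds, which is why either path could in principle sustain the feedback loop even if the other stalls.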